
    An Audit Logic for Accountability

    We describe and implement a policy language. In our system, agents can distribute data along with usage policies in a decentralized architecture. Our language supports the specification of conditions and obligations, and also the possibility to refine policies. In our framework, compliance with usage policies is not actively enforced. However, agents are accountable for their actions and may be audited by an authority requiring justifications.
    Comment: To appear in Proceedings of IEEE Policy 200

    A versatile approach to combining trust values for making binary decisions

    In open multi-agent systems, agents typically need to rely on others for the provision of information or the delivery of resources. However, since different agents' capabilities, goals and intentions do not necessarily agree with each other, trust cannot be taken for granted, in the sense that an agent cannot always be expected to be willing and able to perform optimally from a focal agent's point of view. Instead, the focal agent has to form and update beliefs about other agents' capabilities and intentions. Many different approaches, models and techniques have been used for this purpose in the past, which generate trust and reputation values. In this paper, employing one particularly popular trust model, we focus on the way an agent may use such trust values in trust-based decision-making about the value of a binary variable. We use computer simulation experiments to assess the relative efficacy of a variety of decision-making methods. In doing so, we argue for systematic analysis of such methods beforehand, so that, based on an investigation of characteristics of different methods, different classes of parameter settings can be distinguished. Whether, on average across many random problem instances, a certain method performs better or worse than alternatives is not the issue, given that the agent using the method always exists in a particular setting. We find that combining trust values using our likelihood method yields performance that is relatively robust to changes in the setting an agent may find herself in.
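    The abstract does not spell out its likelihood method, so the following is only a minimal sketch, assuming each trust value is read as the probability that the corresponding source reports the binary variable correctly; the function name `likelihood_decision` and the odds-ratio combination rule are illustrative assumptions, not the paper's definition:

    ```python
    def likelihood_decision(reports, prior=0.5):
        """Combine binary reports, weighted by trust, via a naive likelihood ratio.

        reports: list of (says_true, trust) pairs, where trust is read as the
        probability that this source reports correctly (an assumption).
        Returns (decision, posterior probability that the variable is true).
        """
        odds = prior / (1 - prior)
        for says_true, trust in reports:
            trust = min(max(trust, 1e-9), 1 - 1e-9)  # clamp to avoid division by zero
            ratio = trust / (1 - trust)
            # a "true" report multiplies the odds up, a "false" report divides them
            odds *= ratio if says_true else 1 / ratio
        p = odds / (1 + odds)
        return p >= 0.5, p
    ```

    Under this reading, a highly trusted dissenter can outweigh several weakly trusted agreers, which is one way such a method differs from simple majority voting.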

    An introduction to the role based trust management framework RT

    Trust Management (TM) is a novel, flexible approach to access control in distributed systems, where access control decisions are based on policy statements, called credentials, made by different principals and stored in a distributed manner. In this chapter we present an introduction to TM, focusing on the role-based trust-management framework RT. In particular, we focus on RT0, the simplest representative of the RT family, and we describe its syntax and semantics in detail. We also present solutions to the problem of credential discovery in distributed environments.
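    RT0 has four credential forms: simple membership (A.r ← B), simple inclusion (A.r ← B.r1), linked roles (A.r ← A.r1.r2), and intersection (A.r ← B.r1 ∩ C.r2). The tuple encoding and the function below are a hypothetical sketch of the least-fixpoint semantics, not the chapter's actual notation or algorithm:

    ```python
    # Hypothetical tuple encoding of RT0 credentials:
    #   ("member",    A, r, B)              A.r <- B
    #   ("include",   A, r, B, r1)          A.r <- B.r1
    #   ("link",      A, r, r1, r2)         A.r <- A.r1.r2
    #   ("intersect", A, r, B, r1, C, r2)   A.r <- B.r1 ∩ C.r2

    def members(credentials):
        """Compute role memberships as the least fixpoint of the credential rules.

        Returns a set of (A, r, principal) triples: principal is in role A.r.
        """
        m = set()
        changed = True
        while changed:  # iterate until no rule adds a new membership
            changed = False
            for c in credentials:
                new = set()
                if c[0] == "member":
                    _, A, r, B = c
                    new.add((A, r, B))
                elif c[0] == "include":
                    _, A, r, B, r1 = c
                    new |= {(A, r, p) for (x, y, p) in m if (x, y) == (B, r1)}
                elif c[0] == "link":
                    _, A, r, r1, r2 = c
                    mids = {p for (x, y, p) in m if (x, y) == (A, r1)}
                    new |= {(A, r, p) for (x, y, p) in m if y == r2 and x in mids}
                elif c[0] == "intersect":
                    _, A, r, B, r1, C, r2 = c
                    s1 = {p for (x, y, p) in m if (x, y) == (B, r1)}
                    s2 = {p for (x, y, p) in m if (x, y) == (C, r2)}
                    new |= {(A, r, p) for p in s1 & s2}
                if not new <= m:
                    m |= new
                    changed = True
        return m
    ```

    For example, the credentials StateU.student ← Alice and EPub.discount ← StateU.student together derive that Alice is a member of EPub.discount.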

    Point-based trust: Define how much privacy is worth

    There has been much recent work on privacy-preserving access control negotiations, i.e., carrying out the negotiation in a manner that minimizes the disclosure of credentials and of access policies. This paper introduces the notion of point-based policies for access control and gives protocols for implementing them in a disclosure-minimizing fashion. Specifically, Bob values each credential with a certain number of points and requires a minimum total threshold of points before granting Alice access to a resource. In turn, Alice values each of her credentials with a privacy score that indicates her reluctance to reveal that credential. She is interested in achieving the required threshold for accessing the resource while minimizing the sum of the privacy scores of her used credentials. Bob's valuation of credentials is private and should not be revealed, as is his threshold. Alice's privacy-valuation of her credentials is also private and should not be revealed. What Alice uses is a subset of her credentials that achieves Bob's required threshold for access, yet is of as small a value to her as possible. We give protocols for computing such a subset of Alice's credentials without revealing either of the two parties' above-mentioned sensitive valuation functions and threshold numbers. A contribution of this paper that goes beyond the specific problem considered is a general method for recovering an optimal solution from any value-computing dynamic programming computation, while detecting cheating by the participants. Specifically, our traceback technique relies on the subset sum problem to force consistency.
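    In the clear (i.e., ignoring the paper's privacy-preserving protocols and cheating-detecting traceback), Alice's selection problem is a minimum-cost knapsack: minimize total privacy score subject to the credentials' points reaching Bob's threshold. A plain dynamic-programming sketch, with hypothetical names, might look like:

    ```python
    def cheapest_subset(credentials, threshold):
        """Pick a subset of credentials reaching >= threshold points
        with minimum total privacy score.

        credentials: list of (name, points, privacy_score)
        Returns (min_total_privacy, sorted names), or (None, None) if
        the threshold is unreachable.
        """
        # best[t] = (min privacy cost, chosen names) to reach a point
        # total of t, with totals capped at the threshold
        best = {0: (0.0, ())}
        for name, pts, score in credentials:
            nxt = dict(best)  # fresh table per item => each credential used at most once
            for t, (cost, names) in best.items():
                nt = min(threshold, t + pts)
                cand = (cost + score, names + (name,))
                if nt not in nxt or cand[0] < nxt[nt][0]:
                    nxt[nt] = cand
            best = nxt
        if threshold not in best:
            return None, None
        cost, names = best[threshold]
        return cost, sorted(names)
    ```

    Note this sketch reveals both valuations to whoever runs it; the point of the paper is precisely to compute such a subset without either party learning the other's points, scores, or threshold.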